Stitching Satellites to the Edge: Pervasive and Efficient Federated LEO Satellite Learning

arXiv.org Artificial Intelligence

In the ambitious realm of space AI, the integration of federated learning (FL) with low Earth orbit (LEO) satellite constellations holds immense promise. However, many challenges persist in terms of feasibility, learning efficiency, and convergence. These hurdles stem from the bottleneck in communication, characterized by sporadic and irregular connectivity between LEO satellites and ground stations, coupled with the limited computation capability of satellite edge computing (SEC). This paper proposes a novel FL-SEC framework that empowers LEO satellites to execute large-scale machine learning (ML) tasks onboard efficiently. Its key components include i) personalized learning via divide-and-conquer, which identifies and eliminates redundant satellite images and converts complex multi-class classification problems to simple binary classification, enabling rapid and energy-efficient training of lightweight ML models suitable for IoT/edge devices on satellites; ii) orbital model retraining, which generates an aggregated "orbital model" per orbit and retrains it before sending it to the ground station, significantly reducing the required communication rounds. We conducted experiments using a Jetson Nano, an edge device closely mimicking the limited compute on LEO satellites, and a real satellite dataset. The results underscore the effectiveness of our approach, highlighting SEC's ability to run lightweight ML models on real, high-resolution satellite imagery. Our approach reduces FL convergence time by nearly 30 times and cuts satellite power consumption to as low as 1.38 watts, all while maintaining an accuracy of up to 96%.
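The two components above can be illustrated with a minimal sketch: each satellite trains a tiny binary classifier on local data, the weights of all satellites in an orbit are averaged FedAvg-style into an "orbital model", and that model is retrained once more before downlink. The toy data, logistic model, and function names here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def train_binary(X, y, w=None, lr=0.5, epochs=200):
    """Tiny logistic-regression trainer standing in for the per-satellite
    binary classifier (illustrative, not the paper's exact model)."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on log-loss
    return w

def aggregate_orbit(models):
    """FedAvg-style mean of member weights -> one 'orbital model' per orbit."""
    return np.mean(np.stack(models), axis=0)

# Two satellites in one orbit, each with local imagery features (toy data).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(40, 2)), rng.normal(size=(40, 2))
y1, y2 = (X1[:, 0] > 0).astype(float), (X2[:, 0] > 0).astype(float)

orbital = aggregate_orbit([train_binary(X1, y1), train_binary(X2, y2)])
orbital = train_binary(X1, y1, w=orbital)   # orbital retraining before downlink
acc = float(np.mean((X1 @ orbital > 0) == (y1 > 0.5)))
```

Only the single retrained orbital model reaches the ground station, which is how the scheme cuts communication rounds relative to per-satellite uploads.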


Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions

arXiv.org Artificial Intelligence

Evaluating bias in machine-learning-based services (MLS) using traditional algorithmic fairness notions, which rely on comparative principles, is practically difficult, making it necessary to rely on human auditor feedback. However, despite rigorous training on various comparative fairness notions, human auditors are known to disagree on various aspects of fairness in practice, making it difficult to collect reliable feedback. This paper offers a paradigm shift in the domain of algorithmic fairness by proposing a new fairness notion based on the principle of non-comparative justice. In contrast to traditional fairness notions, where the outcomes of two individuals/groups are compared, our proposed notion compares the MLS' outcome with a desired outcome for each input. This desired outcome naturally describes a human auditor's expectation and can easily be used to evaluate MLS on crowd-auditing platforms. We show that any MLS can be deemed fair from the perspective of comparative fairness (be it individual fairness, statistical parity, equal opportunity, or calibration) if it is non-comparatively fair with respect to a fair auditor. We also show that the converse holds true in the context of individual fairness. Given that such an evaluation relies on the trustworthiness of the auditor, we also present an approach to identify fair and reliable auditors by estimating their biases with respect to a given set of sensitive attributes, as well as to quantify the uncertainty in the estimation of those biases for a given MLS. All of the above results are validated on the COMPAS, German Credit, and Adult Census Income datasets.

In recent years, rapid advancements in artificial intelligence (AI) and machine learning (ML) have led to the proliferation of algorithmic decision making in many practical applications. Examples include decision-support systems that help judges decide whether or not to release a prisoner on parole [1], automated financial decisions in banks regarding granting or denying loans [2], and product recommendations by e-commerce websites [3].
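The core of the proposed notion, comparing the MLS' outcome against an auditor's desired outcome per input rather than comparing two individuals, can be sketched as a simple disagreement rate. The metric name and the binary-decision example are assumptions for illustration, not the paper's exact formalization.

```python
def noncomparative_unfairness(mls_outcomes, desired_outcomes):
    """Disagreement rate between the MLS outcomes and the auditor's desired
    outcomes (0.0 means the MLS is non-comparatively fair w.r.t. this auditor)."""
    pairs = list(zip(mls_outcomes, desired_outcomes))
    return sum(m != d for m, d in pairs) / len(pairs)

mls     = [1, 0, 1, 1, 0, 0]   # MLS decisions (e.g. loan granted = 1)
desired = [1, 1, 1, 0, 0, 0]   # one auditor's desired outcome per input
score = noncomparative_unfairness(mls, desired)
```

Because each input needs only a single "desired outcome" label, this kind of feedback is straightforward to collect on crowd-auditing platforms, without asking auditors to reason about pairs of individuals or groups.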


Fast PDN Impedance Prediction Using Deep Learning

arXiv.org Artificial Intelligence

Modeling and simulating a power distribution network (PDN) for printed circuit boards (PCBs) with irregular board shapes and multi-layer stackups is computationally inefficient using full-wave simulations. This paper presents a new concept of using deep learning for PDN impedance prediction. A boundary element method (BEM) is applied to efficiently calculate the impedance for arbitrary board shapes and stackups. Then over one million boards with different shapes, stackups, IC locations, and decap placements are randomly generated to train a deep neural network (DNN). The trained DNN can predict the impedance accurately for new board configurations that have not been used for training. Inference with the trained DNN takes only 0.1 seconds, which is over 100 times faster than the BEM method and 5000 times faster than full-wave simulations.
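The surrogate-modeling pattern described above (solve many configurations once with a fast numerical method, fit a network to the results, then predict unseen configurations almost instantly) can be sketched as follows. The `bem_impedance` function here is a hypothetical smooth stand-in for the BEM solver, and the random-feature network is a lightweight stand-in for the paper's DNN.

```python
import numpy as np

rng = np.random.default_rng(1)

def bem_impedance(F):
    """Hypothetical stand-in for the BEM solver: impedance magnitude as a
    smooth function of two board features (shape/stackup parameters)."""
    return np.sin(F[:, :1]) + 0.5 * F[:, 1:2] ** 2

# "Simulate" many board configurations once, then fit a surrogate to them.
X = rng.uniform(-1, 1, size=(500, 2))
Y = bem_impedance(X)

# Random-feature network: fixed random hidden layer + least-squares output.
W1 = rng.normal(size=(2, 64)); b1 = rng.normal(size=64)
H = np.tanh(X @ W1 + b1)
W2, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Inference on unseen boards is just two matrix products (microseconds),
# versus a full BEM or full-wave run per configuration.
X_new = rng.uniform(-1, 1, size=(100, 2))
pred = np.tanh(X_new @ W1 + b1) @ W2
mse = float(np.mean((pred - bem_impedance(X_new)) ** 2))
```

The speedup reported in the abstract comes from exactly this shift: the expensive solver is only paid for during dataset generation, while every subsequent prediction is a cheap forward pass.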


On the Design of Strategic Task Recommendations for Sustainable Crowdsourcing-Based Content Moderation

arXiv.org Artificial Intelligence

Crowdsourcing-based content moderation platforms host tasks in which crowd workers review user submissions (e.g. text, images, and videos) and decide on the admissibility of the posted content, alongside a gamut of other tasks such as image labeling and speech-to-text conversion. In an attempt to reduce cognitive overload on workers and improve system efficiency, these platforms offer personalized task recommendations according to each worker's preferences. However, current state-of-the-art recommendation systems disregard the effects on workers' mental health, especially when workers are repeatedly exposed to content moderation tasks with extreme content (e.g. violent images, hate speech). In this paper, we propose a novel strategic recommendation system for crowdsourcing platforms that recommends tasks based on the worker's mental state. Specifically, we model the interaction between the platform's recommendation system (leader) and the worker (follower) as a Bayesian Stackelberg game, where the follower's type corresponds to the worker's cognitive atrophy rate and task preferences. We discuss how rewards and costs should be designed to steer the game towards desired outcomes, maximizing the platform's productivity while simultaneously improving the working conditions of crowd workers.
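The leader-follower structure can be illustrated with a tiny Bayesian Stackelberg sketch: the platform commits to a recommendation, each worker type best-responds, and the platform picks the commitment maximizing its expected utility over types. The type names, payoff numbers, and restriction to pure leader strategies are all simplifying assumptions, not the paper's model.

```python
# Leader actions: which task to recommend. Follower actions: accept/decline.
leader_actions = ["moderation", "labeling"]
follower_actions = ["accept", "decline"]

# payoffs[type][leader_action][follower_action] = (leader_utility, follower_utility)
# A fatigued worker gets strongly negative utility from extreme-content tasks.
payoffs = {
    "resilient": {
        "moderation": {"accept": (5, 2), "decline": (0, 0)},
        "labeling":   {"accept": (3, 3), "decline": (0, 0)},
    },
    "fatigued": {
        "moderation": {"accept": (5, -4), "decline": (0, 0)},
        "labeling":   {"accept": (3, 2),  "decline": (0, 0)},
    },
}
type_prob = {"resilient": 0.5, "fatigued": 0.5}   # leader's belief over types

def best_response(t, a):
    """Follower of type t best-responds to the leader's committed action a."""
    return max(follower_actions, key=lambda b: payoffs[t][a][b][1])

def expected_leader_utility(a):
    """Leader's expected utility for committing to a, over the type prior."""
    return sum(p * payoffs[t][a][best_response(t, a)][0]
               for t, p in type_prob.items())

best = max(leader_actions, key=expected_leader_utility)
```

With these numbers, recommending moderation is wasted on the fatigued type (who declines), so the leader's expected utility is higher when it commits to the gentler labeling task, which both types accept.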


Incremental Cluster Validity Indices for Hard Partitions: Extensions and Comparative Study

arXiv.org Machine Learning

Validation is one of the most important aspects of clustering, but most approaches have been batch methods. Recently, interest has grown in providing incremental alternatives. This paper extends the incremental cluster validity index (iCVI) family to include incremental versions of Calinski-Harabasz (iCH), the I index and Pakhira-Bandyopadhyay-Maulik (iI and iPBM), Silhouette (iSIL), Negentropy Increment (iNI), Representative Cross Information Potential (irCIP) and Representative Cross Entropy (irH), and the Conn Index (iConn Index). Additionally, the effect of under- and over-partitioning on the behavior of these iCVIs, the Partition Separation (PS) index, as well as two other recently developed iCVIs (incremental Xie-Beni (iXB) and incremental Davies-Bouldin (iDB)), was examined through a comparative study. Experimental results using fuzzy adaptive resonance theory (ART)-based clustering methods showed that while evidence of most under-partitioning cases could be inferred from the behaviors of all these iCVIs, over-partitioning was found to be a more challenging scenario indicated only by the iConn Index. The expansion of incremental validity indices provides significant novel opportunities for assessing and interpreting the results of unsupervised learning.

L. E. Brito da Silva is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA, and also with the CAPES Foundation, Ministry of Education of Brazil, Brasília, DF 70040-020, Brazil (email: leonardoenzo@ieee.org). N. M. Melton is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA (email: niklasmelton@ieee.org). D. C. Wunsch II is with the Applied Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409 USA (email: wunsch@ieee.org).

INTRODUCTION. Cluster validation [1] is a critical topic in cluster analysis.
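The general idea behind iCVIs, maintaining running cluster statistics so a validity index can be recomputed after every sample instead of over the whole batch, can be sketched for Calinski-Harabasz. The Welford-style recurrences below are a standard simplification for illustration, not the paper's exact iCH derivation.

```python
import numpy as np

class IncrementalCH:
    """Incremental Calinski-Harabasz sketch: per-cluster counts, means, and
    within-cluster scatter updated one sample at a time (Welford-style)."""

    def __init__(self):
        self.n = 0
        self.mean = None      # running global mean
        self.clusters = {}    # label -> [count, mean, within-cluster SS]

    def update(self, x, label):
        x = np.asarray(x, dtype=float)
        self.n += 1
        if self.mean is None:
            self.mean = np.zeros_like(x)
        self.mean += (x - self.mean) / self.n            # global mean update
        c = self.clusters.setdefault(label, [0, np.zeros_like(x), 0.0])
        c[0] += 1
        delta = x - c[1]
        c[1] += delta / c[0]                             # cluster mean update
        c[2] += float(delta @ (x - c[1]))                # Welford SS update

    def value(self):
        k = len(self.clusters)
        if k < 2 or self.n <= k:
            return 0.0
        wgss = sum(c[2] for c in self.clusters.values())
        bgss = sum(c[0] * float((c[1] - self.mean) @ (c[1] - self.mean))
                   for c in self.clusters.values())
        return (bgss / (k - 1)) / (wgss / (self.n - k))

# Two well-separated clusters, streamed one point at a time.
ich = IncrementalCH()
for x, lbl in [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
               ((10, 10), "b"), ((10, 11), "b"), ((11, 10), "b")]:
    ich.update(x, lbl)
ch = ich.value()
```

Each update is O(d) per sample, which is what makes such indices usable alongside streaming clusterers like the ART-based methods in the study, where a batch recomputation after every point would be wasteful.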


Calendar of Events

AI Magazine

All accepted papers will appear in the conference proceedings published by AAAI Press. Selected authors will be invited to submit extended versions of their papers to a special issue of the International Journal on Artificial Intelligence Tools. The papers should not exceed 5 pages and are due by October 24, 2003. All submissions will be done electronically via the FLAIRS web submission system, which will be available through the conference website. Please consult the conference web page for details on paper submission.

Organizers: Ingrid Russell, University of Hartford (irussell@hartford.edu); Valerie Barr, Hofstra University; Zdravko Markov, Central Connecticut State University.